A flexible space-variant anisotropic regularisation for image restoration with automated parameter selection
We propose a new space-variant anisotropic regularisation term for
variational image restoration, based on the statistical assumption that the
gradients of the target image are locally distributed according to a bivariate
generalised Gaussian distribution. The highly flexible variational structure of
the corresponding regulariser encodes several free parameters which hold the
potential for faithfully modelling the local geometry in the image and
describing local orientation preferences. For an automatic estimation of such
parameters, we design a robust maximum likelihood approach and report results
on its reliability on synthetic data and natural images. For the numerical
solution of the corresponding image restoration model, we use an iterative
algorithm based on the Alternating Direction Method of Multipliers (ADMM). A
suitable preliminary variable splitting, together with a novel result in
multivariate non-convex proximal calculus, yields a very efficient minimisation
algorithm. Several numerical results are reported, showing a significant
quality improvement of the proposed model with respect to related
state-of-the-art competitors, in particular in terms of texture and detail
preservation.
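To make the optimisation structure concrete, below is a minimal ADMM sketch for a generic gradient-domain restoration model. It is not the paper's algorithm: the space-variant anisotropic regulariser, its non-convex multivariate prox and the automatic parameter estimation are replaced by a plain isotropic TV prox (vector soft thresholding), and the parameters `lam` and `rho`, as well as the periodic-boundary discretisation, are illustrative assumptions.

```python
# Minimal ADMM sketch for  min_u R(Du) + (lam/2)||u - f||^2  with the
# splitting v = Du. The prox below is a plain isotropic-TV stand-in for
# the paper's space-variant anisotropic (non-convex) proximal step.
import numpy as np

def grad(u):
    # Forward differences with periodic boundaries: D u.
    return np.stack([np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u])

def div(p):
    # Discrete divergence, satisfying div = -D^T.
    return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

def prox_tv(v, t):
    # Pixel-wise vector soft thresholding: prox of t * ||.||_{2,1}.
    n = np.maximum(np.sqrt((v ** 2).sum(axis=0)), 1e-12)
    return v * np.maximum(1.0 - t / n, 0.0)

def admm_restore(f, lam=10.0, rho=1.0, iters=200):
    u, v = f.copy(), grad(f)
    w = np.zeros_like(v)                      # scaled dual variable
    # FFT symbol of (lam*I - rho*Laplacian) for the quadratic u-update.
    ty = 2 * np.pi * np.fft.fftfreq(f.shape[0])[:, None]
    tx = 2 * np.pi * np.fft.fftfreq(f.shape[1])[None, :]
    denom = lam + rho * (4 - 2 * np.cos(ty) - 2 * np.cos(tx))
    for _ in range(iters):
        rhs = lam * f - rho * div(v - w)      # = lam*f + rho*D^T(v - w)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        v = prox_tv(grad(u) + w, 1.0 / rho)   # regulariser prox (stand-in)
        w = w + grad(u) - v                   # dual ascent step
    return u
```

In the paper's setting, the `prox_tv` step would instead apply the prox of the space-variant bivariate generalised Gaussian potential, computed via the multivariate non-convex proximal calculus result mentioned above.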
Analysis and optimisation of a variational model for mixed Gaussian and Salt & Pepper noise removal
We analyse a variational regularisation problem for mixed noise removal that was recently proposed in [14]. The data discrepancy term of the model combines L1 and L2 terms in an infimal convolution fashion and is appropriate for the joint removal of Gaussian and Salt & Pepper noise. In this work we perform a finer analysis of the model which emphasises the balancing effect of the two parameters appearing in the discrepancy term. Namely, we study the asymptotic behaviour of the model for large and small values of these parameters and we compare it to the corresponding variational models with L1 and L2 data fidelity. Furthermore, we compute exact solutions for simple data functions, taking the total variation as regulariser. Using these theoretical results, we then analytically study a bilevel optimisation strategy for automatically selecting the parameters of the model by means of a training set. Finally, we report some numerical results on the selection of the optimal noise model via such a strategy, which confirm the validity of our analysis and the use of popular data models in the case of "blind" model selection.
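For concreteness, a schematic form of such a model with total variation regularisation is sketched below; the exact weighting and scaling conventions of [14] may differ:

$$
\min_{u}\;\mathrm{TV}(u)+\bigl(\lambda_1\|\cdot\|_{L^1}\,\square\,\lambda_2\|\cdot\|_{L^2}^2\bigr)(f-u)
\;=\;\min_{u,v}\;\mathrm{TV}(u)+\lambda_1\|f-u-v\|_{L^1}+\lambda_2\|v\|_{L^2}^2,
$$

where $\square$ denotes infimal convolution: the $L^1$ component absorbs the sparse Salt & Pepper outliers, the $L^2$ component accounts for the Gaussian noise, and $\lambda_1,\lambda_2$ realise the balancing effect studied in the analysis.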
New PDE models for imaging problems and applications
Variational methods and Partial Differential Equations (PDEs) have been extensively employed for the mathematical formulation of a myriad of problems describing physical phenomena such as heat propagation, thermodynamic transformations and many more. In imaging, PDEs following variational principles are often considered. In their general form these models combine a regularisation and a data fitting term, balancing one against the other appropriately. Total variation (TV) regularisation is often used due to its edge-preserving and smoothing properties. In this thesis, we focus on the design of TV-based models for several different applications. We start by considering PDE models encoding higher-order derivatives to overcome well-known TV reconstruction drawbacks. Due to their high differential order and nonlinear nature, the computation of the numerical solution of these equations is often challenging. In this thesis, we propose directional splitting techniques and use Newton-type methods that, despite these numerical hurdles, yield reliable and efficient computational schemes. Next, we discuss the problem of choosing the appropriate data fitting term in the case when multiple noise statistics are present in the data due, for instance, to different acquisition and transmission problems. We propose a novel variational model which encodes the different noise distributions appropriately and consistently. Balancing the effect of the regularisation against the data fitting is also crucial. To this end, we consider a learning approach which estimates the optimal ratio between the two by using training sets of examples via bilevel optimisation. Numerically, we use a combination of semismooth Newton (SSN) and quasi-Newton methods to solve the problem efficiently. Finally, we consider TV-based models in the framework of graphs for image segmentation problems. Here, spectral properties combined with matrix completion techniques are needed to overcome the computational limitations due to the large amount of image data. Further, a semi-supervised technique for the measurement of the segmented region by means of the Hough transform is proposed.
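Schematically, the models referred to throughout take the generic variational form below, together with the bilevel problem used to learn the balancing weight; this is a generic sketch, not the thesis' exact formulations:

$$
\hat u\in\arg\min_{u}\;\mathrm{TV}(u)+\lambda\,\Phi(u,f),
\qquad
\min_{\lambda>0}\;\sum_{k}\bigl\|u_\lambda(f_k)-u_k^\dagger\bigr\|_{L^2}^2
\;\;\text{s.t.}\;\;u_\lambda(f_k)\in\arg\min_u\,\mathrm{TV}(u)+\lambda\,\Phi(u,f_k),
$$

where $\Phi$ is the data fitting term (e.g. $\tfrac12\|u-f\|_{L^2}^2$ for Gaussian noise) and $(f_k,u_k^\dagger)$ are training pairs of noisy and clean images.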
Beyond $\ell_1$ sparse coding in V1
Growing evidence indicates that only a sparse subset from a pool of sensory
neurons is active for the encoding of visual stimuli at any instant in time.
Traditionally, to replicate such biological sparsity, generative models have
been using the $\ell_1$ norm as a penalty due to its convexity, which makes it
amenable to fast and simple algorithmic solvers. In this work, we use
biological vision as a test-bed and show that the soft thresholding operation
associated to the use of the $\ell_1$ norm is highly suboptimal compared to
other functions suited to approximating the $\ell_0$ pseudo-norm
(including recently proposed Continuous Exact relaxations), both in terms of
performance and in the production of features that are akin to signatures of
the primary visual cortex. We show that $\ell_1$ sparsity produces a denser
code or employs a pool with more neurons, i.e. has a higher degree of
overcompleteness, in order to maintain the same reconstruction error as the
other methods considered. For all the penalty functions tested, a subset of the
neurons develop orientation selectivity similarly to V1 neurons. When their
code is sparse enough, the methods also develop receptive fields with varying
functionalities, another signature of V1. Compared to other methods, soft
thresholding achieves this level of sparsity at the expense of a much degraded
reconstruction performance, which is most likely not acceptable in
biological vision. Our results indicate that V1 uses a sparsity-inducing
regularization that is closer to the $\ell_0$ pseudo-norm than to the
$\ell_1$ norm.
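As a toy illustration of the contrast the abstract draws, the sketch below runs proximal gradient sparse coding with the $\ell_1$ prox (soft thresholding) versus the $\ell_0$ prox (hard thresholding) on a synthetic dictionary; the dictionary, step size and `lam` are illustrative assumptions, not the paper's experimental setup.

```python
# Proximal gradient sparse coding:  min_a 0.5||x - Phi a||^2 + lam * pen(a),
# contrasting the l1 prox (soft thresholding, shrinks every coefficient)
# with the l0 prox (hard thresholding, keeps large coefficients unbiased).
import numpy as np

def soft_threshold(a, t):
    # prox of t*||a||_1: shrink all coefficients towards zero by t.
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def hard_threshold(a, t):
    # prox of t*||a||_0: keep a coefficient iff a^2/2 > t, no shrinkage.
    return np.where(a ** 2 > 2.0 * t, a, 0.0)

def ista(x, Phi, lam=0.1, iters=200, prox=soft_threshold):
    # Proximal gradient descent with step 1/L, where L = ||Phi||_2^2.
    L = np.linalg.norm(Phi, 2) ** 2
    a = np.zeros(Phi.shape[1])
    for _ in range(iters):
        a = prox(a - (Phi.T @ (Phi @ a - x)) / L, lam / L)
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128)) / 8.0   # overcomplete dictionary
a_true = np.zeros(128)
a_true[rng.choice(128, 5, replace=False)] = 1.0
x = Phi @ a_true
for prox in (soft_threshold, hard_threshold):
    a = ista(x, Phi, prox=prox)
    print(prox.__name__, "active:", int((a != 0).sum()),
          "reconstruction error:", round(float(np.linalg.norm(x - Phi @ a)), 3))
```

On such toy instances the hard-thresholded (non-convex) code is typically sparser at comparable reconstruction error, mirroring the trade-off discussed above; the non-convex iteration is, of course, only guaranteed to find a local minimiser.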
Parameter-Free FISTA by Adaptive Restart and Backtracking
We consider a combined restarting and adaptive backtracking strategy for the
popular Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), frequently employed for
accelerating the convergence speed of large-scale structured convex
optimization problems. Several variants of FISTA enjoy a provable linear
convergence rate for the function values of the form $O(e^{-K\sqrt{\mu/L}\,n})$
under the prior knowledge of the problem conditioning, i.e.
of the ratio $\mu/L$ between the (Łojasiewicz) parameter $\mu$ determining the growth
of the objective function and the Lipschitz constant $L$ of its smooth
component. These parameters are nonetheless hard to estimate in many practical
cases. Recent works address the problem by estimating either parameter via
suitable adaptive strategies. In our work both parameters can be estimated at
the same time by means of an algorithmic restarting scheme where, at each
restart, a non-monotone estimation of $L$ is performed. For this scheme,
theoretical convergence results are proved, showing that an
$e^{-K\sqrt{\mu/L}\,n}$ convergence speed can still be achieved along with
quantitative estimates of the conditioning. The resulting Free-FISTA algorithm
is therefore parameter-free. Several numerical results are reported to confirm
the practical interest of its use in many exemplar problems.
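A simplified sketch of the two ingredients combined above is given below: FISTA with a backtracking estimate of the Lipschitz constant and a heuristic function-value restart. This is an illustrative stand-in, not the authors' Free-FISTA (which additionally produces quantitative estimates of $\mu$ and of the conditioning); the LASSO instance and all parameter values are assumptions.

```python
# FISTA with backtracking on the Lipschitz estimate L and a heuristic
# function-value restart; a sketch of the two ingredients, not Free-FISTA.
import numpy as np

def fista_restart(f, grad_f, prox_g, g, x0, L0=1.0, eta=2.0, iters=500):
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    F_prev = f(x) + g(x)
    for _ in range(iters):
        gy, fy = grad_f(y), f(y)
        while True:                        # backtracking line search on L
            x_new = prox_g(y - gy / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= fy + gy @ d + 0.5 * L * (d @ d):
                break                      # quadratic upper bound holds
            L *= eta                       # otherwise increase L and retry
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        F_now = f(x) + g(x)
        if F_now > F_prev:                 # restart: reset the momentum
            y, t = x.copy(), 1.0
        F_prev = F_now
    return x

# Toy LASSO instance:  min_x 0.5||Ax - b||^2 + lam ||x||_1
rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((100, 200)), rng.standard_normal(100), 0.5
x = fista_restart(
    lambda x: 0.5 * np.sum((A @ x - b) ** 2),
    lambda x: A.T @ (A @ x - b),
    lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0),
    lambda x: lam * np.sum(np.abs(x)),
    np.zeros(200))
print("final objective:", 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum())
```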